156 research outputs found

    Parallel Search with no Coordination

    We consider a parallel version of a classical Bayesian search problem. $k$ agents are looking for a treasure that is placed in one of the boxes indexed by $\mathbb{N}^+$ according to a known distribution $p$. The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel: at each time step, each searcher can "peek" into a box. A basic family of algorithms which are inherently robust is \emph{non-coordinating} algorithms. Such algorithms act independently at each searcher, differing only in their probabilistic choices. We are interested in the price incurred by employing such algorithms compared with the case of full coordination. We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of boxes according to $p$, has expected running time of at most $10+4(1+\frac{1}{k})^2 T$, where $T$ is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by Fraigniaud, Korman and Rodeh in STOC'16, which was designed for the context of linear parallel search. We then describe an optimal non-coordinating algorithm for the case where the distribution $p$ is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where $p$ is uniform over a finite set of boxes, the algorithm simply checks boxes uniformly at random among all unchecked boxes and is essentially $2$ times worse than the coordinating algorithm. We also show simple algorithms for Pareto distributions over $M$ boxes. That is, in the case where $p(x) \sim 1/x^b$ for $0 < b < 1$, we suggest the following algorithm: at step $t$ choose uniformly from the boxes unchecked in $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$, where $\sigma = b/(b + k - 1)$. It turns out this algorithm is asymptotically optimal, and runs about $2+b$ times worse than the case of full coordination.
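
    The Pareto-case algorithm above can be simulated directly. The sketch below (in Python; the function name and parameter values are illustrative assumptions, not the authors' code) runs $k$ non-coordinating searchers, each independently checking a uniformly random still-unchecked box within the growing prefix $\{1, \ldots, \min(M, \lfloor t/\sigma \rfloor)\}$:

```python
import random

def pareto_search_time(M, b, k, treasure, rng):
    """Simulate the non-coordinating algorithm for a Pareto distribution
    over M boxes: at step t, each of the k searchers independently checks
    a uniformly random box it has not yet checked among boxes
    1..min(M, floor(t / sigma)), with sigma = b / (b + k - 1).
    Returns the step at which some searcher first finds the treasure."""
    sigma = b / (b + k - 1)
    unchecked = [set(range(1, M + 1)) for _ in range(k)]
    t = 1
    while True:
        limit = min(M, int(t / sigma))
        for boxes in unchecked:
            candidates = [x for x in boxes if x <= limit]
            if not candidates:
                continue  # this searcher has exhausted the current prefix
            box = rng.choice(candidates)
            boxes.remove(box)
            if box == treasure:
                return t
        t += 1
```

    Because searchers share no state, each run is an independent sample of the non-coordinating process; averaging `pareto_search_time` over many runs estimates the expected running time.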

    Fast Two-Robot Disk Evacuation with Wireless Communication

    In the fast evacuation problem, we study the path planning problem for two robots who want to minimize the worst-case evacuation time on the unit disk. The robots are initially placed at the center of the disk. In order to evacuate, they need to reach an unknown point, the exit, on the boundary of the disk. Once one of the robots finds the exit, it will instantaneously notify the other agent, who will make a beeline to it. The problem has been studied for robots with the same speed~\cite{s1}. We study a more general case where one robot has speed $1$ and the other has speed $s \geq 1$. We provide optimal evacuation strategies in the case that $s \geq c_{2.75} \approx 2.75$ by showing matching upper and lower bounds on the worst-case evacuation time. For $1 \leq s < c_{2.75}$, we show (non-matching) upper and lower bounds on the evacuation time with a ratio less than $1.22$. Moreover, we demonstrate that a generalization of the two-robot search strategy from~\cite{s1} is outperformed by our proposed strategies for any $s \geq c_{1.71} \approx 1.71$. Comment: 18 pages, 10 figures
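
    The effect of the speed ratio $s$ can be illustrated numerically with a simple baseline (not the paper's optimal strategy; function names and the evaluation grid are assumptions): both robots move from the center to the same boundary point, walk the boundary in opposite directions, and the finder wirelessly recalls the other robot along a straight chord.

```python
import math

def evac_time(theta, s):
    """Evacuation time for an exit at angle theta (radians) under the
    baseline strategy: both robots go to angle 0, then the speed-s robot
    walks counter-clockwise and the speed-1 robot clockwise; the finder
    instantly notifies the other, who beelines to the exit."""
    t_fast = (1 + theta) / s             # fast robot reaches angle theta
    t_slow = 1 + (2 * math.pi - theta)   # slow robot reaches it clockwise
    if t_fast <= t_slow:
        t, other_speed = t_fast, 1.0
        if t < 1:                        # slow robot still on the radius
            ox, oy = t, 0.0
        else:                            # slow robot on the arc, clockwise
            a = -(t - 1)
            ox, oy = math.cos(a), math.sin(a)
    else:
        t, other_speed = t_slow, s
        if t < 1 / s:                    # fast robot still on the radius
            ox, oy = t * s, 0.0
        else:                            # fast robot on the arc, CCW
            a = s * t - 1
            ox, oy = math.cos(a), math.sin(a)
    ex, ey = math.cos(theta), math.sin(theta)
    return t + math.hypot(ex - ox, ey - oy) / other_speed

def worst_case(s, steps=10000):
    """Worst-case evacuation time over a grid of exit angles."""
    return max(evac_time(2 * math.pi * i / steps, s) for i in range(1, steps))
```

    For $s = 1$ this baseline recovers the known worst case $1 + 2\pi/3 + \sqrt{3} \approx 4.83$ for the opposite-directions wireless strategy, and increasing $s$ strictly improves it.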

    Evacuating Two Robots from a Disk: A Second Cut

    We present an improved algorithm for the problem of evacuating two robots from the unit disk via an unknown exit on the boundary. Robots start at the center of the disk, move at unit speed, and can only communicate locally. Our algorithm improves previous results by Brandt et al. [CIAC'17] by introducing a second detour through the interior of the disk. This allows for an improved evacuation time of $5.6234$. The best known lower bound of $5.255$ was shown by Czyzowicz et al. [CIAC'15]. Comment: 19 pages, 5 figures. This is the full version of the paper with the same title accepted at the 26th International Colloquium on Structural Information and Communication Complexity (SIROCCO'19)

    Network Analysis of Biochemical Logic for Noise Reduction and Stability: A System of Three Coupled Enzymatic AND Gates

    We develop an approach aimed at optimizing the parameters of a network of biochemical logic gates for reduction of the "analog" noise buildup. Experiments for three coupled enzymatic AND gates are reported, illustrating our procedure. Specifically, starch - one of the controlled network inputs - is converted to maltose by beta-amylase. With the use of phosphate (another controlled input), maltose phosphorylase then produces glucose. Finally, nicotinamide adenine dinucleotide (NAD+) - the third controlled input - is reduced under the action of glucose dehydrogenase to yield the optically detected signal. Network functioning is analyzed by varying selected inputs and fitting standardized few-parameter "response-surface" functions assumed for each gate. This allows a probe of individual gate quality, but primarily yields information on the relative contribution of the gates to noise amplification. The derived information is then used to modify our experimental system to put it in a less noisy operating regime. Comment: 31 pages, PDF
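
    The idea of gauging noise amplification from fitted gate responses can be sketched with a toy model. Below, a hypothetical Hill-type response surface stands in for the fitted "response-surface" functions (the functional form, exponent, and simplified cascade wiring are assumptions for illustration only):

```python
import math

def and_gate(x, y, n=2.0):
    """Hypothetical Hill-type response surface for an enzymatic AND gate:
    the (normalized) output is high only when both inputs are high."""
    def h(u):
        return u ** n / (0.5 ** n + u ** n)
    return h(x) * h(y)

def noise_gain(f, x, y, eps=1e-5):
    """Small-signal noise amplification of a gate: the norm of the
    gradient of its response surface at the operating point (x, y)."""
    dfx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    dfy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return math.hypot(dfx, dfy)

def network(starch, phosphate, nad, enzyme=1.0):
    """Three AND gates in cascade, loosely mirroring the
    starch -> maltose -> glucose -> NADH chain of the experiment."""
    maltose = and_gate(starch, enzyme)
    glucose = and_gate(maltose, phosphate)
    return and_gate(glucose, nad)
```

    In this toy model the gradient norm is largest near the switching threshold and smaller in the saturated corners; ranking the gates by such per-gate gains is the kind of information used to move the network into a less noisy operating regime.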

    Rendezvous on a Line by Location-Aware Robots Despite the Presence of Byzantine Faults

    A set of mobile robots is placed at points of an infinite line. The robots are equipped with GPS devices and they may communicate their positions on the line to a central authority. The collection contains an unknown subset of "spies", i.e., byzantine robots, which are indistinguishable from the non-faulty ones. The non-faulty robots need to rendezvous in the shortest possible time in order to perform some task, while the byzantine robots may try to delay their rendezvous for as long as possible. The problem facing the central authority is to determine trajectories for all robots so as to minimize the time until the non-faulty robots have rendezvoused. The trajectories must be determined without knowledge of which robots are faulty. Our goal is to minimize the competitive ratio between the time required to achieve the first rendezvous of the non-faulty robots and the time required for such a rendezvous to occur under the assumption that the faulty robots are known at the start. We provide a bounded-competitive-ratio algorithm for the case where the central authority is informed only of the set of initial robot positions, without knowing which ones or how many of them are faulty. When an upper bound on the number of byzantine robots is known to the central authority, we provide algorithms with better competitive ratios. In some instances we are able to show these algorithms are optimal.
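
    The competitive ratio used here can be made concrete with a toy one-shot strategy (not one of the paper's algorithms; function names and the example instance are illustrative assumptions): the authority sends every robot to the midpoint of all reported positions, and we compare against the offline optimum that knows the faulty set.

```python
def meet_at(point, positions):
    """Time for unit-speed robots at `positions` to all reach `point`."""
    return max(abs(p - point) for p in positions)

def competitive_ratio(positions, faulty):
    """Toy illustration: online, everyone is sent to the midpoint of all
    reported positions; offline, the non-faulty robots (indices not in
    `faulty`) meet at the midpoint of their own positions."""
    good = [p for i, p in enumerate(positions) if i not in faulty]
    online = meet_at((min(positions) + max(positions)) / 2, good)
    offline = meet_at((min(good) + max(good)) / 2, good)
    return online / offline
```

    A single spy reporting a far-away position already inflates this naive strategy's ratio arbitrarily, which is why algorithms that hedge against unknown faulty subsets are needed.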

    Density-dependence of functional development in spiking cortical networks grown in vitro

    During development, the mammalian brain differentiates into specialized regions with distinct functional abilities. While many factors contribute to functional specialization, we explore the effect of neuronal density on the development of neuronal interactions in vitro. Two types of cortical networks, dense and sparse, with 50,000 and 12,000 total cells respectively, are studied. Activation graphs that represent pairwise neuronal interactions are constructed using a competitive first response model. These graphs reveal that, during development in vitro, dense networks form activation connections earlier than sparse networks. Link entropy analysis of dense network activation graphs suggests that the majority of connections between electrodes are reciprocal in nature. Information theoretic measures reveal that early functional information interactions (among 3 cells) are synergetic in both dense and sparse networks. However, during later stages of development, previously synergetic relationships become primarily redundant in dense, but not in sparse networks. Large link entropy values in the activation graph are related to the domination of redundant ensembles in late stages of development in dense networks. Results demonstrate differences between dense and sparse networks in terms of informational groups, pairwise relationships, and activation graphs. These differences suggest that variations in cell density may result in different functional specialization of nervous system tissue in vivo. Comment: 10 pages, 7 figures
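
    One plausible reading of the link-entropy computation (an assumption; the paper's exact definition may differ) treats each electrode pair as a Bernoulli variable over activation direction, so that entropy near 1 bit marks a reciprocal link:

```python
import math
from collections import defaultdict

def link_entropy(events):
    """Entropy (in bits) of the activation direction for each electrode
    pair, given a list of (source, target) activation events. Entropy
    near 1 means the link is reciprocal; near 0, one-directional."""
    counts = defaultdict(lambda: [0, 0])   # (lo, hi) -> [lo->hi, hi->lo]
    for a, b in events:
        lo, hi = min(a, b), max(a, b)
        counts[(lo, hi)][0 if (a, b) == (lo, hi) else 1] += 1
    entropy = {}
    for pair, (fwd, rev) in counts.items():
        total = fwd + rev
        entropy[pair] = -sum(n / total * math.log2(n / total)
                             for n in (fwd, rev) if n)
    return entropy
```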

    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.
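
    The two prerequisites (multifunctionality and partial overlap) can be illustrated with a toy assignment model; the greedy rule and the example repertoires below are assumptions for illustration, not the genome:proteome model of the paper:

```python
def satisfied_demand(agents, demand):
    """Greedy toy model of networked buffering: each agent contributes
    one unit of capacity to any one function in its repertoire. Agents
    with smaller repertoires are assigned first; each spends its unit on
    its most-needed feasible function. Returns demand units covered."""
    remaining = dict(demand)
    covered = 0
    for repertoire in sorted(agents, key=len):
        options = [f for f in repertoire if remaining.get(f, 0) > 0]
        if options:
            best = max(options, key=lambda g: remaining[g])
            remaining[best] -= 1
            covered += 1
    return covered

# Same population size; only the overlap structure differs.
redundant = [("A",)] * 3 + [("B",)] * 3
degenerate = [("A",), ("A", "B"), ("A", "B"), ("A", "B"), ("B",), ("B",)]
```

    With demand skewed toward function A (a local perturbation), the degenerate population covers more of it than the purely redundant one of the same size, because agents whose primary role is B lend their excess capacity to A.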

    Exploring the Contextual Sensitivity of Factors that Determine Cell-to-Cell Variability in Receptor-Mediated Apoptosis

    Stochastic fluctuations in gene expression give rise to cell-to-cell variability in protein levels, which can potentially cause variability in cellular phenotype. For TRAIL (TNF-related apoptosis-inducing ligand), variability manifests itself as dramatic differences in the time between ligand exposure and the sudden activation of the effector caspases that kill cells. However, the contribution of individual proteins to phenotypic variability has not been explored in detail. In this paper we use feature-based sensitivity analysis as a means to estimate the impact of variation in key apoptosis regulators on variability in the dynamics of cell death. We use Monte Carlo sampling from measured protein concentration distributions in combination with a previously validated ordinary differential equation model of apoptosis to simulate the dynamics of receptor-mediated apoptosis. We find that variation in the concentrations of some proteins matters much more than variation in others, and that precisely which proteins matter depends both on the concentrations of other proteins and on whether correlations in protein levels are taken into account. A prediction from simulation that we confirm experimentally is that variability in fate is sensitive to even small increases in the levels of Bcl-2. We also show that sensitivity to Bcl-2 levels is itself sensitive to the levels of interacting proteins. This contextual dependency is implicit in the mathematical formulation of sensitivity, but our data show that it is also important for biologically relevant parameter values. Our work provides a conceptual and practical means to study and understand the impact of cell-to-cell variability in protein expression levels on cell fate using deterministic models and sampling from parameter distributions.
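
    The sampling-and-feature pipeline can be sketched end to end with a deliberately simplified surrogate for the ODE model (the death-time formula, the roles assigned to the protein names, and the distribution parameters below are assumptions for illustration; the paper uses a previously validated ODE model and measured concentration distributions):

```python
import math
import random

def death_time(levels):
    """Toy surrogate for the apoptosis model: time to effector-caspase
    activation shortens with pro-death inputs (C8, Bid) and lengthens
    with the anti-apoptotic protein Bcl-2."""
    return 1.0 + 10.0 * levels["Bcl2"] / (levels["C8"] * levels["Bid"])

def sensitivities(n=2000, seed=1):
    """Sample log-normal protein levels, compute the death-time feature,
    and score each protein by the Pearson correlation of its level with
    that feature (a simple feature-based sensitivity measure)."""
    rng = random.Random(seed)
    names = ["C8", "Bid", "Bcl2"]
    samples, times = {k: [] for k in names}, []
    for _ in range(n):
        levels = {k: rng.lognormvariate(0.0, 0.3) for k in names}
        for k in names:
            samples[k].append(levels[k])
        times.append(death_time(levels))

    def corr(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / math.sqrt(vx * vy)

    return {k: corr(samples[k], times) for k in names}
```

    In this surrogate the Bcl-2 level correlates positively with death time while the pro-death inputs correlate negatively, mirroring the qualitative sensitivity ranking described above.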

    Stochastic Responses May Allow Genetically Diverse Cell Populations to Optimize Performance with Simpler Signaling Networks

    Two theories have emerged for the role that stochasticity plays in biological responses: first, that it degrades biological responses, so the performance of biological signaling machinery could be improved by increasing molecular copy numbers of key proteins; second, that it enhances biological performance, by enabling diversification of population-level responses. Using T cell biology as an example, we demonstrate that these roles for stochastic responses are not sufficient to understand experimental observations of stochastic response in complex biological systems that utilize environmental and genetic diversity to make cooperative responses. We propose a new role for stochastic responses in biology: they enable populations to make complex responses with simpler biochemical signaling machinery than would be required in the absence of stochasticity. Thus, the evolution of stochastic responses may be linked to the evolvability of different signaling machineries. National Institutes of Health (U.S.) Pioneer Award

    Biophysics across time and space

    Understanding the behaviour of almost any biological object is a fundamentally multiscale problem — a challenge that biophysicists have been increasingly embracing, building on two centuries of biophysical studies at a variety of length scales